
Early endeavors on the path to reliable quantum machine learning

Date:
June 8, 2021
Source:
ETH Zurich
Summary:
Future quantum computers should be capable of super-fast and reliable computation, but today this is still a major challenge. Now, computer scientists have conducted an early exploration of reliable quantum machine learning.

Anyone who collects mushrooms knows that it is better to keep the poisonous and the non-poisonous ones apart -- not to mention what would happen if someone ate the poisonous ones. In such "classification problems," which require us to distinguish objects from one another and to assign them to classes on the basis of their characteristics, computers can already provide useful support to humans.
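To make the idea concrete, the short Python sketch below classifies mushrooms from a handful of invented characteristics (cap width, presence of a ring, an odour score). The feature names and values are purely illustrative, not a real mushroom dataset, and scikit-learn's decision tree simply stands in for whatever model one would actually train:

```python
# Toy "poisonous vs. edible" classifier. The features (cap width in cm,
# ring present, odour score) and all values are invented for illustration;
# this is not a real mushroom dataset.
from sklearn.tree import DecisionTreeClassifier

# Each row: [cap_width_cm, has_ring (0/1), odour_score]
X_train = [
    [5.0, 1, 0.10],   # edible
    [7.5, 1, 0.20],   # edible
    [3.0, 0, 0.90],   # poisonous
    [4.2, 0, 0.80],   # poisonous
]
y_train = ["edible", "edible", "poisonous", "poisonous"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# Classify a newly observed mushroom from its measured characteristics.
print(model.predict([[6.1, 1, 0.15]]))   # expected: ['edible']
```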

Intelligent machine learning methods can recognise patterns or objects and automatically pick them out of data sets. For example, they could pick out those pictures from a photo database that show non-toxic mushrooms. Particularly with very large and complex data sets, machine learning can deliver valuable results that humans would not be able to find at all, or only with far more time. However, for certain computational tasks, even the fastest computers available today reach their limits. This is where the great promise of quantum computers comes into play: that one day they will perform super-fast calculations that classical computers cannot complete in a useful period of time.

The reason for this "quantum supremacy" lies in physics: quantum computers calculate and process information by exploiting certain states and interactions that occur within atoms or molecules or between elementary particles.

The fact that quantum states can superpose and entangle creates a basis that gives quantum computers access to a fundamentally richer set of processing logic. Unlike classical computers, which calculate with binary bits that represent information only as 0 or 1, quantum computers calculate with quantum bits or qubits, which correspond to the quantum states of particles. The crucial difference is that a qubit can realise not only one state -- 0 or 1 -- per computational step, but also a state in which both superpose. These more general ways of processing information in turn allow for a drastic computational speed-up on certain problems.
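Under the standard textbook convention, a single qubit can be written as a two-component state vector whose squared amplitudes give the measurement probabilities. The following sketch is a classical simulation of that picture, not real quantum hardware, and shows an equal superposition of 0 and 1:

```python
# Classical simulation of a single qubit, using the usual convention
# |psi> = alpha|0> + beta|1> with |alpha|^2 + |beta|^2 = 1.
import numpy as np

ket0 = np.array([1.0, 0.0])            # the state 0
ket1 = np.array([0.0, 1.0])            # the state 1
plus = (ket0 + ket1) / np.sqrt(2)      # equal superposition of 0 and 1

# The Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(plus) ** 2
print(probs)                            # [0.5 0.5]

# Measuring the superposed qubit many times yields 0 and 1 about equally often.
rng = np.random.default_rng(seed=0)
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(outcomes.mean())                  # close to 0.5
```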

Translating classical wisdom into the quantum realm

These speed advantages of quantum computing are also an opportunity for machine learning applications -- after all, quantum computers could process the huge amounts of data that machine learning methods need to improve the accuracy of their results far faster than classical computers can.

However, to really exploit the potential of quantum computing, one has to adapt the classical machine learning methods to the peculiarities of quantum computers. For example, the algorithms, i.e. the mathematical calculation rules that describe how a classical computer solves a certain problem, must be formulated differently for quantum computers. Developing well-functioning "quantum algorithms" for machine learning is not entirely trivial, because there are still a few hurdles to overcome along the way.

On the one hand, this is due to the quantum hardware. At ETH Zurich, researchers currently have quantum computers that work with up to 17 qubits (see "ETH Zurich and PSI found Quantum Computing Hub" of 3 May 2021). However, if quantum computers are to realise their full potential one day, they might need thousands to hundreds of thousands of qubits.

Quantum noise and the inevitability of errors

One challenge that quantum computers face is their vulnerability to error. Today's quantum computers operate with a very high level of "noise," as errors and disturbances are known in technical jargon. According to the American Physical Society, this noise is "the major obstacle to scaling up quantum computers." No comprehensive solution yet exists for either correcting or mitigating errors: no way has been found to produce error-free quantum hardware, and quantum computers with 50 to 100 qubits are too small to implement error-correction software or algorithms.

To a certain extent, one has to live with the fact that errors in quantum computing are in principle unavoidable, because the quantum states on which the concrete computational steps are based can only be distinguished and quantified with probabilities. What can be achieved, however, are procedures that limit noise and perturbations enough for the calculations nevertheless to deliver reliable results. Computer scientists refer to a reliably functioning calculation method as "robust" and in this context also speak of the necessary "error tolerance."
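A classical toy model conveys the intuition: if a single run of a noisy subroutine gives the right answer only with a certain probability, repeating it and taking a majority vote can make the overall result reliable. The error rate below is an arbitrary assumed value, and the sketch is not an actual quantum error-mitigation technique:

```python
# Classical toy model of computing with noise (not an actual quantum
# error-mitigation technique): each run of a noisy subroutine returns the
# correct bit only with probability 1 - eps, but repetition plus a majority
# vote makes the final answer far more reliable.
import numpy as np

rng = np.random.default_rng(seed=42)
eps = 0.2            # assumed per-run error rate ("noise level")
true_answer = 1

def noisy_run():
    """Return the true answer, flipped with probability eps."""
    return true_answer if rng.random() > eps else 1 - true_answer

single_shot = noisy_run()                            # wrong about 20% of the time
votes = [noisy_run() for _ in range(101)]
majority = int(sum(votes) > len(votes) / 2)          # wrong far less often
print(single_shot, majority)
```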

This is exactly what the research group led by Ce Zhang, ETH computer science professor and member of the ETH AI Center, has recently explored, somewhat "accidentally" during an endeavor to reason about the robustness of classical distributions for the purpose of building better machine learning systems and platforms. Together with Professor Nana Liu from Shanghai Jiao Tong University and Professor Bo Li from the University of Illinois at Urbana-Champaign, they have developed a new approach. It allows them to prove the robustness conditions of certain quantum-based machine learning models, for which the quantum computation is guaranteed to be reliable and the result to be correct. The researchers have published their approach, one of the first of its kind, in the scientific journal npj Quantum Information.

Protection against errors and hackers

"When we realised that quantum algorithms, like classical algorithms, are prone to errors and perturbations, we asked ourselves how we can estimate these sources of errors and perturbations for certain machine learning tasks, and how we can guarantee the robustness and reliability of the chosen method," says Zhikuan Zhao, a postdoc in Ce Zhang's group. "If we know this, we can trust the computational results, even if they are noisy."

The researchers investigated this question using quantum classification algorithms as an example -- after all, errors in classification tasks are tricky because they can affect the real world, for example if poisonous mushrooms were classified as non-toxic. Most importantly, using the theory of quantum hypothesis testing, which allows quantum states to be distinguished -- and inspired by other researchers' recent work applying hypothesis testing in the classical setting -- the ETH researchers determined a threshold above which the assignments of the quantum classification algorithm are guaranteed to be correct and its predictions robust.
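In spirit, such a certificate can be pictured as follows: a prediction is accepted as robust only if the estimated probability of the predicted class clears a noise-dependent threshold. The sketch below is a deliberately simplified, classical illustration; the threshold formula is a placeholder and not the quantum-hypothesis-testing bound derived in the paper.

```python
# Deliberately simplified robustness check: accept a classification as
# robust only if the estimated probability of the predicted class clears a
# noise-dependent threshold. The formula 0.5 + noise_margin is a placeholder
# in the spirit of classical robustness certificates, NOT the
# quantum-hypothesis-testing bound derived in the npj Quantum Information paper.
import numpy as np

def certify(class_probs, noise_margin):
    """Return (predicted_label, certified_robust) for one input."""
    label = int(np.argmax(class_probs))
    p_top = float(class_probs[label])
    threshold = 0.5 + noise_margin       # placeholder robustness threshold
    return label, p_top > threshold

# Example: a binary "edible (0) vs. poisonous (1)" output distribution.
print(certify(np.array([0.15, 0.85]), noise_margin=0.2))   # (1, True)
print(certify(np.array([0.45, 0.55]), noise_margin=0.2))   # (1, False)
```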

With their robustness method, the researchers can even verify whether the classification of an erroneous, noisy input yields the same result as that of a clean, noiseless input. From their findings, the researchers have also developed a protection scheme that can be used to specify the error tolerance of a computation; their robustness concept covers both natural errors and errors introduced by manipulation in a hacking attack.

"The method can also be applied to a broader class of quantum algorithms," says Maurice Weber, a doctoral student with Ce Zhang and the first author of the publication. Since the impact of error in quantum computing increases as the system size rises, he and Zhao are now conducting research on this problem. "We are optimistic that our robustness conditions will prove useful, for example, in conjunction with quantum algorithms designed to better understand the electronic structure of molecules."


Story Source:

Materials provided by ETH Zurich. Original written by Florian Meyer. Note: Content may be edited for style and length.


Journal Reference:

  1. Maurice Weber, Nana Liu, Bo Li, Ce Zhang, Zhikuan Zhao. Optimal provable robustness of quantum classification via quantum hypothesis testing. npj Quantum Information, 2021; 7 (1) DOI: 10.1038/s41534-021-00410-5
